29 research outputs found

    Análisis del software libre como herramienta de producción

    This article presents quantitative data that provide a clearer picture when evaluating software acquisition and the implementation of open-source and free software. It examines variables such as target market, reliability, performance, scalability, security, and acquisition cost when weighing open source as a viable alternative to proprietary software, from an objective perspective grounded in in-depth worldwide studies of the factors that influence decision-making at every level of implementation, for software of all kinds.
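
    As a hedged illustration of the kind of quantitative comparison the article describes, the sketch below scores two hypothetical alternatives against weighted criteria. The criteria, weights, and scores are invented for the example, not taken from the article.

        # Hypothetical weighted scoring of software alternatives.
        # Criteria, weights and scores are illustrative only.
        criteria = {"reliability": 0.25, "performance": 0.20, "scalability": 0.15,
                    "security": 0.25, "acquisition_cost": 0.15}

        # Scores on a 0-10 scale (higher is better; cost is scored inversely).
        alternatives = {
            "open_source": {"reliability": 8, "performance": 8, "scalability": 9,
                            "security": 8, "acquisition_cost": 10},
            "proprietary": {"reliability": 9, "performance": 9, "scalability": 8,
                            "security": 8, "acquisition_cost": 4},
        }

        for name, scores in alternatives.items():
            total = sum(criteria[c] * scores[c] for c in criteria)
            print(f"{name}: {total:.2f}")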

    Linux Ubuntu server

    A wide variety of free distributions of the Linux operating system exist today; Linux has earned a prominent place thanks to its multiuser and multitasking capabilities, stability, security, connectivity, scalability, and compatibility with a wide range of applications. One of the most widely used distributions in scientific, academic, industrial, and commercial settings is Ubuntu, sponsored by Canonical Ltd., a British company owned by the South African Mark Shuttleworth. Ubuntu provides multiple tools for configuring services such as DHCP (Dynamic Host Configuration Protocol), DNS (Domain Name System), LDAP, Samba, proxies, and the Apache web server, among others, so its functionality is quite broad when it comes to configuring services for workstations and servers.
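
    As a small, hedged illustration of the services listed above, the sketch below probes the standard TCP ports of a few of them on a host. The address is a placeholder, and the port list is only a sample (DHCP is omitted because it uses UDP).

        import socket

        # Standard TCP ports for a few of the services named above.
        SERVICES = {"DNS": 53, "HTTP (Apache)": 80, "LDAP": 389, "SMB (Samba)": 445}
        HOST = "192.0.2.10"  # placeholder address; replace with your server

        for name, port in SERVICES.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)
                status = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
                print(f"{name:14s} port {port:4d}: {status}")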

    PCA filtering and probabilistic SOM for network intrusion detection

    The growth of the Internet and, consequently, of the number of interconnected computers has exposed significant amounts of information to intruders and attackers. Firewalls aim to detect violations according to a predefined rule set and usually block potentially dangerous incoming traffic. However, as attack techniques evolve, it becomes harder to distinguish anomalies from normal traffic. Different detection approaches have been proposed, including machine learning techniques based on neural models such as Self-Organizing Maps (SOMs). In this paper, we present a classification approach that hybridizes statistical techniques and SOM for network anomaly detection: Principal Component Analysis (PCA) and the Fisher Discriminant Ratio (FDR) are used for feature selection and noise removal, while Probabilistic Self-Organizing Maps (PSOM) model the feature space and make it possible to distinguish between normal and anomalous connections.
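
    A minimal sketch of the two stages on synthetic data: PCA to reduce dimensionality, then a tiny Self-Organizing Map trained on the projected features. Note this uses a plain SOM rather than the probabilistic variant (PSOM) of the paper, and every value and parameter here is invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "connections": 200 normal, 40 anomalous, 20 raw features.
        normal = rng.normal(0.0, 1.0, size=(200, 20))
        anomal = rng.normal(3.0, 1.0, size=(40, 20))
        X = np.vstack([normal, anomal])

        # --- PCA filtering: keep the top principal components ---
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        Z = Xc @ Vt[:5].T                        # project onto 5 components

        # --- Toy SOM: a 4x4 grid of units trained on the projections ---
        grid = np.array([(i, j) for i in range(4) for j in range(4)], float)
        W = rng.normal(size=(16, Z.shape[1]))
        for t in range(2000):
            x = Z[rng.integers(len(Z))]
            lr = 0.5 * (1 - t / 2000)             # decaying learning rate
            sigma = 2.0 * (1 - t / 2000) + 0.1    # decaying neighborhood width
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)        # neighborhood update

        # Units that mostly capture anomalous samples can then be labelled as such.
        bmus = np.argmin(((Z[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
        print("BMU of first normal sample:", bmus[0], "| first anomaly:", bmus[200])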

    Navegador ontológico matemático-NOMAT

    Query algorithms in search engines use indexing, contextual analysis, and ontologies, among other techniques, for text search. However, they do not support equations, because of the complexity of writing them. NOMAT is a prototype search engine for mathematical expressions that looks for information both in a thesaurus and on the internet, using an ontological tool to filter and contextualize information and a LaTeX editor for the symbols in these expressions. The search engine was created to support mathematical research. Unlike other internet search engines, NOMAT does not require prior knowledge of LaTeX, because it has an editing tool that lets users write directly the symbols that make up the mathematical expression of interest. The results obtained were accurate and contextualized compared with other commercial and non-commercial search engines.
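
    To make searching by expression concrete, here is a hedged sketch, not NOMAT's actual algorithm, that tokenizes LaTeX strings and ranks a tiny invented corpus by token overlap with the query.

        import re

        # Toy corpus of indexed LaTeX expressions (invented examples).
        CORPUS = {
            "quadratic formula": r"x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}",
            "euler identity":    r"e^{i\pi} + 1 = 0",
            "gaussian integral": r"\int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}",
        }

        def tokenize(tex):
            """Split a LaTeX string into commands (\frac, \sqrt, ...) and symbols."""
            return set(re.findall(r"\\[A-Za-z]+|[A-Za-z0-9]|[=+\-^_]", tex))

        def search(query):
            """Rank corpus entries by Jaccard overlap with the query's tokens."""
            q = tokenize(query)
            scored = [(len(q & tokenize(tex)) / len(q | tokenize(tex)), name)
                      for name, tex in CORPUS.items()]
            return sorted(scored, reverse=True)

        print(search(r"\sqrt{b^2 - 4ac}"))  # quadratic formula should rank first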

    Application of FEAST (Feature Selection Toolbox) in IDS (Intrusion Detection Systems)

    Security in computer networks has become a critical issue for many organizations, but preserving data integrity demands time and large economic investments. As a consequence, several hardware and software solution approaches have been proposed, but they have sometimes proved inefficient at detecting attacks. This paper presents research results obtained by implementing algorithms from FEAST, a Matlab toolbox, with the purpose of selecting the method with the best precision for detecting different attacks using the smallest number of features. The NSL-KDD dataset was taken as a reference. The Relief method obtained the best precision levels for attack detection: 86.20% (NORMAL), 85.71% (DOS), 88.42% (PROBE), 93.11% (U2R), and 90.07% (R2L), which makes it a promising technique for feature selection in network intrusion data.
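
    Since Relief is the method the paper singles out, here is a minimal sketch of the classic binary Relief scoring rule on synthetic data, assuming features normalized to [0, 1]; the dataset is invented.

        import numpy as np

        def relief(X, y, n_iter=100, rng=np.random.default_rng(0)):
            """Classic binary Relief: reward features that separate the nearest
            hit (same class) from the nearest miss (other class)."""
            n, d = X.shape
            w = np.zeros(d)
            for _ in range(n_iter):
                i = rng.integers(n)
                dist = np.abs(X - X[i]).sum(axis=1)   # L1 distance to sample i
                dist[i] = np.inf                      # exclude the sample itself
                same, diff = (y == y[i]), (y != y[i])
                hit = np.argmin(np.where(same, dist, np.inf))
                miss = np.argmin(np.where(diff, dist, np.inf))
                w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
            return w

        # Synthetic data: only feature 0 actually separates the classes.
        rng = np.random.default_rng(1)
        y = rng.integers(0, 2, 300)
        X = rng.random((300, 5))
        X[:, 0] = 0.8 * y + 0.2 * X[:, 0]             # informative feature
        print(np.round(relief(X, y), 3))              # feature 0 scores highest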

    Feature selection by multi-objective optimization: application to network anomaly detection by hierarchical self-organizing maps.

    Feature selection is an important and active issue in clustering and classification problems. Choosing an adequate feature subset reduces the dataset's dimensionality, which decreases the computational complexity of classification and improves classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with a single objective, namely the classification accuracy obtained using the selected feature subset, several multi-objective approaches have been proposed in recent years. These select features that improve not only the classification accuracy but also the generalisation capability, in the case of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features exhibited by some of the methods used to validate the clustering/classification, in the case of unsupervised classifiers. The main contribution of this paper is a multi-objective approach to feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organizing Maps (GHSOM), which includes a new method for unit labelling and for efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The feature sets selected in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%. This work has been funded by FEDER funds and the Ministerio de Ciencia e Innovación of the Spanish Government under Project No. TIN2012-32039.
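
    As a hedged illustration of the multi-objective idea, maximizing accuracy while minimizing subset size, the sketch below filters candidate feature subsets down to their Pareto front. The (size, accuracy) pairs are invented placeholders, not values from the paper.

        # Candidate (num_features, accuracy) pairs for illustrative feature subsets.
        candidates = [(5, 0.962), (8, 0.981), (12, 0.985), (12, 0.971),
                      (20, 0.991), (25, 0.990), (41, 0.992)]

        def dominates(a, b):
            """a dominates b if it is no worse on both objectives (fewer or equal
            features, higher or equal accuracy) and differs from b."""
            return a[0] <= b[0] and a[1] >= b[1] and a != b

        pareto = [c for c in candidates
                  if not any(dominates(o, c) for o in candidates)]
        print(sorted(pareto))   # non-dominated trade-offs between size and accuracy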

    Intrusion detection model in network systems, making feature selection with FDR and classification-training stages with SOM

    Current commercial Intrusion Detection Systems (IDSs) classify network traffic, detecting both intrusions and normal connections, by applying signature-based methods. This leads to problems, since only previously known intrusions are detected and the signature database periodically becomes outdated. This paper evaluates the efficiency of a proposed network intrusion detection model using sensitivity and specificity metrics, through a simulation process based on the NSL-KDD DARPA dataset: the most relevant features are selected with FDR, and a neural network using an unsupervised learning algorithm based on Self-Organizing Maps (SOMs) is trained to automatically classify network traffic into normal and attack connections. The simulation yielded a sensitivity of 99.69% and a specificity of 56.15%, using 20 and 15 features respectively.
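
    A minimal sketch of the two measurable pieces of such a model, assuming synthetic data: the Fisher Discriminant Ratio used to rank features, FDR_i = (mu_1 - mu_2)^2 / (sigma_1^2 + sigma_2^2) per feature, and the sensitivity/specificity metrics used for evaluation.

        import numpy as np

        def fdr_scores(X, y):
            """Fisher Discriminant Ratio per feature for two classes:
            (mean1 - mean2)^2 / (var1 + var2)."""
            a, b = X[y == 0], X[y == 1]
            return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0))

        def sensitivity_specificity(y_true, y_pred):
            """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP); attack = 1."""
            tp = np.sum((y_true == 1) & (y_pred == 1))
            fn = np.sum((y_true == 1) & (y_pred == 0))
            tn = np.sum((y_true == 0) & (y_pred == 0))
            fp = np.sum((y_true == 0) & (y_pred == 1))
            return tp / (tp + fn), tn / (tn + fp)

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 500)
        X = rng.normal(size=(500, 10))
        X[:, 3] += 2.0 * y                      # make feature 3 discriminative
        top = np.argsort(fdr_scores(X, y))[::-1]
        print("features ranked by FDR:", top)   # feature 3 should rank first
        print(sensitivity_specificity(y, (X[:, 3] > 1.0).astype(int)))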